We present a novel paradigm for building animatable 3D human representations from monocular video input, such that the subject can be rendered in any unseen pose and from any viewpoint. Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy. Previous methods usually rely on multi-view videos or accurate 3D geometry as additional inputs; moreover, most methods suffer quality degradation when generalizing to unseen poses. We identify that the key to generalization is a good input embedding for querying the dynamic NeRF: a good input embedding should define an injective mapping in the full volumetric space, guided by the surface mesh deformation under pose variation. Based on this observation, we propose to embed an input query with its relationship to a local surface region spanned by a set of geodesic nearest neighbors on the mesh vertices. By including both position and relative-distance information, our embedding defines a distance-preserving deformation mapping and generalizes well to unseen poses. To reduce the dependency on additional inputs, we first initialize per-frame 3D meshes using an off-the-shelf tool, and then propose a pipeline to jointly optimize the NeRF and refine the initial meshes. Extensive experiments show that our method can synthesize plausible human rendering results under unseen poses and viewpoints.
translated by Google Translate
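The surface-relative embedding described above can be illustrated with a minimal numpy sketch. This is a hypothetical simplification: Euclidean nearest neighbors stand in for the paper's geodesic neighborhoods on the mesh, and the function name and feature layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def surface_embedding(query, verts_posed, verts_canonical, k=4):
    """Sketch of a surface-relative input embedding for a dynamic NeRF query.

    For a query point in posed space, find its k nearest mesh vertices
    (Euclidean here; the paper uses geodesic neighborhoods), and embed the
    query by the canonical positions of those vertices plus its relative
    distances to them. The distances are preserved under mesh deformation,
    which is what lets the embedding transfer to unseen poses.
    """
    d = np.linalg.norm(verts_posed - query, axis=1)    # distances in posed space
    idx = np.argsort(d)[:k]                            # k nearest vertices
    rel_dist = d[idx]                                  # pose-invariant distances
    anchors = verts_canonical[idx]                     # neighbors in canonical space
    return np.concatenate([anchors.ravel(), rel_dist])  # (3k + k,) feature
```

The NeRF MLP would then consume this embedding instead of the raw posed-space coordinate, so two query points with the same surface relationship map to the same feature regardless of pose.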
Video instance segmentation (VIS) aims to classify, segment, and track object instances in video sequences. Recent transformer-based neural networks have demonstrated a strong capability to model spatio-temporal correlations for the VIS task. Relying on video- or clip-level inputs, however, they suffer from high latency and computational cost. We propose a robust context fusion network to tackle VIS in an online fashion, which predicts instance segmentation frame by frame using only a few preceding frames. To efficiently acquire precise and temporally consistent predictions for each frame, the key idea is to fuse effective and compact context from reference frames into the target frame. Considering the different impacts of reference and target frames on the target prediction, we first summarize contextual features through importance-aware compression. A transformer encoder is adopted to fuse the compressed context. We then leverage order-preserving instance embeddings to convey identity-aware information and correspond identities to predicted instance masks. We demonstrate that our robust context fusion network achieves the best performance among existing online VIS methods and outperforms previously published clip-level methods on the YouTube-VIS 2019 and 2021 benchmarks. Moreover, visual objects often carry acoustic signatures that are naturally synchronized with them in audio-bearing video recordings. By leveraging the flexibility of our context fusion network on multi-modal data, we further investigate the influence of audio on video dense-prediction tasks, which has never been discussed in existing works. We build an audio-visual instance segmentation dataset and demonstrate that acoustic signals in in-the-wild scenarios can benefit the VIS task.
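The importance-aware compression step can be sketched as follows. This is a toy illustration assuming precomputed per-token importance scores (in the actual network these would be learned from the features); the function name and the keep-top-m policy are assumptions, not the paper's exact design.

```python
import numpy as np

def compress_context(feats, scores, m):
    """Importance-aware context compression (sketch).

    Keep only the m most important context tokens, scaled by their
    importance scores, producing a compact context that a transformer
    encoder can then fuse into the target frame at low cost.
    """
    idx = np.argsort(scores)[::-1][:m]     # indices of the top-m scores
    return feats[idx] * scores[idx, None]  # (m, C) compressed context
```

The compressed tokens from several reference frames would then be concatenated with the target-frame tokens and passed through the transformer encoder for fusion.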
Referring video object segmentation (R-VOS) aims to segment object masks in a video given a language expression referring to the object. It is a recently introduced task that has attracted growing research attention. However, all existing works make a strong assumption: the object depicted by the expression must exist in the video, i.e., the expression and the video must share an object-level semantic consensus. This assumption is often violated in the real world, where expressions can be queried against false videos, and existing methods always fail on such false queries because of the abused assumption. In this work, we emphasize that studying semantic consensus is necessary for improving the robustness of R-VOS. Accordingly, we pose an extended task from R-VOS without the semantic consensus assumption, named Robust R-VOS ($\mathrm{R}^2$-VOS). The $\mathrm{R}^2$-VOS task is essentially related to the joint modeling of the primary R-VOS task and its dual problem (text reconstruction). We embrace the observation that the embedding spaces have relational consistency through the text-video-text transformation cycle, which connects the primary and dual problems. We leverage this cycle consistency to discriminate the semantic consensus, thus advancing the primary task. Parallel optimization of the primary and dual problems is enabled by introducing an early grounding medium. A new evaluation dataset, $\mathrm{R}^2$-YouTube-VOS, is collected to measure the robustness of R-VOS models against unpaired videos and expressions. Extensive experiments demonstrate that our method not only identifies negative pairs of unrelated expressions and videos, but also improves the segmentation accuracy of positive pairs with superior disambiguation ability. Our model achieves state-of-the-art performance on Ref-DAVIS17, Ref-YouTube-VOS, and the novel $\mathrm{R}^2$-YouTube-VOS dataset.
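A minimal sketch of how cycle consistency can flag unpaired expression-video pairs, assuming the original and reconstructed text embeddings are already produced by some encoder and by the dual text-reconstruction branch. The cosine criterion and the threshold `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cycle_consistency(text_emb, reconstructed_emb, tau=0.5):
    """Text-video-text cycle consistency check (sketch).

    Compare the original text embedding with the embedding of the text
    reconstructed from the grounded video region. A low consistency score
    suggests the expression and video lack semantic consensus, i.e. a
    negative (unpaired) query.
    """
    cos = float(text_emb @ reconstructed_emb /
                (np.linalg.norm(text_emb) * np.linalg.norm(reconstructed_emb)))
    return cos, cos >= tau  # (consistency score, is-positive-pair flag)
```

In training, the same consistency signal could also serve as an auxiliary loss that ties the primary segmentation problem to its text-reconstruction dual.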
In the booming video era, video segmentation attracts increasing research attention from the multimedia community. Semi-supervised video object segmentation (VOS) aims at segmenting objects in all target frames of a video, given annotated object masks of reference frames. Most existing methods build pixel-wise reference-target correlations and then perform pixel-wise tracking to obtain target masks. Because they neglect object-level cues, pixel-level approaches make the tracking vulnerable to perturbations and even indiscriminate among similar objects. Towards robust VOS, our key insight is to calibrate the representation and mask of each specific object to be expressive and discriminative. Accordingly, we propose a new deep network that can adaptively construct object representations and calibrate object masks for stronger robustness. First, we construct the object representations by applying an adaptive object proxy (AOP) aggregation method, where the proxies represent arbitrary-shaped segments at multiple levels for reference. Then, prototype masks are initially generated from the AOP-based reference-target correlation. Afterwards, such proto-masks are further calibrated through network modulation, conditioned on the object proxy representations. We consolidate this conditional mask calibration process in a progressive manner, where the object representations and proto-masks evolve to be discriminative iteratively. Extensive experiments are conducted on the standard VOS benchmarks, YouTube-VOS-18/19 and DAVIS-17. Our model achieves state-of-the-art performance among existing published works and also exhibits superior robustness against perturbations. Our project repo is at https://github.com/JerryX1110/Robust-Video-Object-Segmentation
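The proxy-based correlation can be sketched as segment-wise pooling followed by a per-pixel maximum response. The function names (`build_proxies`, `proto_mask_from_proxies`), the mean-pooling choice, and the flattened 1D segment layout are simplifying assumptions for illustration, not the paper's exact AOP aggregation.

```python
import numpy as np

def build_proxies(ref_feats, seg_labels):
    """Build object proxies (sketch): mean-pool reference features
    inside each arbitrary-shaped segment to get one proxy per segment."""
    return np.stack([ref_feats[seg_labels == s].mean(axis=0)
                     for s in np.unique(seg_labels)])

def proto_mask_from_proxies(target_feats, proxy_feats):
    """Correlate per-pixel target features with the object proxies and
    take the max response per pixel as a coarse prototype mask."""
    corr = target_feats @ proxy_feats.T  # (HW, P) reference-target correlation
    return corr.max(axis=1)              # (HW,) prototype mask response
```

In the full model this prototype mask would then be calibrated by network modulation conditioned on the proxy representations, and the process repeated progressively.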
Error propagation is a general but crucial problem in online semi-supervised video object segmentation. We aim to suppress error propagation through a correction mechanism with high reliability. The key insight is to disentangle the correction from the conventional mask-propagation process with reliable cues. We introduce two modulators, a propagation modulator and a correction modulator, which separately perform channel-wise recalibration on the target frame embeddings according to local temporal correlations and reliable references, respectively. Specifically, we assemble the modulators in a cascaded propagation-correction scheme. This prevents the propagation modulator from overwriting the effects of the reliable correction modulator. Although the reference frame with ground-truth labels provides reliable cues, it can be very different from the target frame and introduce uncertain or incomplete correlations. We augment the reference cues by supplementing reliable feature patches to a maintained pool, thus offering more comprehensive and expressive object representations to the modulators. In addition, a reliability filter is designed to retrieve reliable patches and pass them on in subsequent frames. Our model achieves state-of-the-art performance on the YouTube-VOS18/19 and DAVIS17-Val/Test benchmarks. Extensive experiments demonstrate that the correction mechanism provides considerable performance gains by fully exploiting reliable guidance. Code is available at: https://github.com/JerryX1110/RPCMVOS
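The cascaded channel-wise recalibration can be sketched as two successive per-channel scalings, assuming the scale vectors are produced elsewhere (in the actual model, by the propagation and correction modulators from temporal correlations and reliable references). Function names are illustrative.

```python
import numpy as np

def modulate(feat, scale):
    """Channel-wise recalibration (sketch): scale each feature channel."""
    return feat * scale[None, :]  # feat: (HW, C), scale: (C,)

def cascaded_modulation(feat, prop_scale, corr_scale):
    """Apply the propagation modulator first, then the correction modulator,
    so the reliable correction is applied last and is not overwritten
    by propagation-derived (possibly error-bearing) cues."""
    return modulate(modulate(feat, prop_scale), corr_scale)
```

The ordering matters: applying correction last mirrors the cascaded scheme's goal of letting reliable cues have the final say over the target-frame embedding.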
Recently, transformer-based image segmentation methods have achieved notable success over previous solutions. For the video domain, however, how to effectively model temporal context with attention over object instances across frames remains an open problem. In this paper, we propose an online video instance segmentation framework with a novel instance-aware temporal fusion method. We first leverage the representation of a global context (instance code) and a latent code in CNN feature maps to represent instance-level and pixel-level features. Based on this representation, we introduce a cropping-free temporal fusion approach to model the temporal consistency between video frames. Specifically, we encode global instance-specific information in the instance codes and build inter-frame contextual fusion with hybrid attention between the instance codes and the CNN feature maps. Inter-frame consistency between the instance codes is further enforced with order constraints. By leveraging the learned hybrid temporal consistency, we are able to directly retrieve and maintain instance identities across frames, eliminating the complicated frame-wise instance matching of prior methods. Extensive experiments have been conducted on the popular VIS datasets, i.e., YouTube-VIS-19/21. Our model achieves the best performance among all online VIS methods. Notably, our model also eclipses all offline methods when using the ResNet-50 backbone.
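In its simplest form, the hybrid attention between instance codes and pixel features reduces to a single-head cross-attention in which instance codes are queries and CNN feature-map pixels are keys and values. This sketch drops the learned projections and multi-head structure of a real transformer layer, so it is an assumption-laden simplification of the paper's design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(inst_codes, feat_map):
    """Instance codes attend over pixel features (sketch).

    inst_codes: (N, C) one code per instance; feat_map: (HW, C) flattened
    CNN features. Each instance code gathers instance-specific context
    as an attention-weighted average of pixel features.
    """
    attn = softmax(inst_codes @ feat_map.T / np.sqrt(feat_map.shape[1]))
    return attn @ feat_map  # (N, C) updated instance codes
```

Because the instance codes persist across frames and their order is constrained, the same code index can be read out as the same identity in every frame, which is what removes the need for explicit frame-to-frame instance matching.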
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly-added relations that do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a limited number of support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can easily be extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
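The mask-based dynamic weighting idea can be sketched as masked average pooling of support features followed by cosine re-weighting of query features. The function names and the cosine-similarity choice are illustrative assumptions; RefT's actual module is a learned component inside a Transformer-like framework.

```python
import numpy as np

def dynamic_class_center(support_feats, support_mask):
    """Masked average pooling (sketch): average support features inside
    the support mask to obtain a dynamic class center."""
    m = support_mask.astype(bool).ravel()
    return support_feats[m].mean(axis=0)  # (C,)

def reweight_query(query_feats, center, eps=1e-8):
    """Re-weight query features by their cosine similarity to the class
    center, emphasizing locations likely to belong to the novel class."""
    sim = query_feats @ center / (
        np.linalg.norm(query_feats, axis=1) * np.linalg.norm(center) + eps)
    return query_feats * sim[:, None]  # (HW, C)
```

This corresponds to the first, feature-level enhancement; the second, instance-level enhancement would link support object queries to query-image object queries via cross-attention.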
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can easily be adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
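A fairness-aware distillation objective can be sketched as a standard KD term plus a demographic-parity penalty on the student's predictions. This is a generic illustration of the problem setting, not RELIANT's actual (model-structure-decoupled) design; all names and the parity metric are assumptions.

```python
import numpy as np

def kd_loss(student_logp, teacher_p, eps=1e-12):
    """Distillation term (sketch): KL(teacher || student) averaged over
    nodes, with student predictions given in log-probability form."""
    return float((teacher_p * (np.log(teacher_p + eps) - student_logp))
                 .sum(axis=1).mean())

def parity_gap(preds, groups):
    """Demographic-parity gap (sketch): absolute difference in the
    student's positive-prediction rate between two sensitive groups."""
    g = np.asarray(groups, dtype=bool)
    return abs(float(preds[g].mean()) - float(preds[~g].mean()))
```

A fairness-aware training objective could then minimize `kd_loss(...) + lam * parity_gap(...)`, trading distillation fidelity against the bias the student would otherwise inherit from the teacher.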